19 research outputs found

    Face Recognition Based on Texture Descriptors

    Get PDF
    In this chapter, the performance of different texture descriptor algorithms used for face feature extraction is analyzed. These algorithms are commonly used to extract texture characteristics from images with quite good results, so they are also expected to characterize the face in an image fairly well. For the tests, the AR face database was used, a standard database containing images of 120 people, including 70 images with different facial expressions and 30 with sunglasses, all of them under different illumination intensities. From one to seven images per person were used to train the recognition system. Different classifiers, such as Euclidean distance, cosine distance, and the support vector machine (SVM), were also used; the classification results obtained were higher than 98%, and good performance was also achieved in the verification task. The schemes in this chapter were also compared with other approaches, showing the effectiveness of all of them.
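
    A minimal sketch of how such a pipeline could be assembled is shown below, using LBP histograms as the texture descriptor (one commonly used option; the abstract does not name the exact descriptors) and a linear SVM as one of the classifiers mentioned above. The LBP parameters and the arrays train_faces, train_labels and test_face are illustrative assumptions, not the chapter's exact configuration.

        # Hedged sketch: texture-descriptor face recognition with an SVM classifier.
        # Assumes grayscale face crops are already aligned; parameters are illustrative.
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import SVC

        def lbp_histogram(face, points=8, radius=1):
            """Describe a face crop by a normalized uniform-LBP histogram."""
            lbp = local_binary_pattern(face, points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
            return hist

        # train_faces / train_labels: one to seven gallery images per person (hypothetical arrays)
        X_train = np.array([lbp_histogram(f) for f in train_faces])
        clf = SVC(kernel="linear").fit(X_train, train_labels)

        # Identify an unseen face crop (hypothetical test_face array)
        pred = clf.predict(lbp_histogram(test_face).reshape(1, -1))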

    Early Fire Detection on Video Using LBP and Spread Ascending of Smoke

    Get PDF
    This paper proposes a methodology for early fire detection based on visual smoke characteristics such as movement, color, gray tones, and dynamic texture, i.e., diverse but representative and discriminant characteristics, as well as its ascending expansion, which are processed sequentially to find the candidate smoke regions. Once a region with movement is detected, the pixels inside it that have smoke color are estimated to obtain a more detailed description of the candidate smoke region. Next, to increase the system efficiency and reduce false alarms, each region is characterized using the local binary pattern, which analyzes its texture, and is classified by means of a multi-layer perceptron. Finally, the ascending expansion of the candidate region is analyzed, and those regions that maintain or increase their ascending growth over a time span are considered smoke regions, and an alarm is triggered. Evaluations were performed using two different classifiers, namely the multi-layer perceptron and the support vector machine, on a standard smoke video database. Evaluation results show that the proposed system provides fire detection accuracy of between 97.85% and 99.83%.
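
    As a rough illustration of the final temporal filter, the sketch below checks whether a candidate region (already accepted by the LBP and multi-layer perceptron stage) maintains or increases its ascending growth frame after frame before an alarm is raised. The window length and the names ascending_growth, region_tops and tops_history are assumptions for illustration only, not the paper's parameters.

        # Hedged sketch of the ascending-expansion check described above.
        def ascending_growth(region_tops, min_frames=15):
            """region_tops: top-row coordinate of the candidate region per frame
            (a smaller row index means the region reaches higher in the image).
            Returns True when the region kept or increased its ascending expansion
            over the whole window of recent frames."""
            if len(region_tops) < min_frames:
                return False
            recent = region_tops[-min_frames:]
            # each frame the top edge must stay at the same height or move upward
            return all(curr <= prev for prev, curr in zip(recent, recent[1:]))

        # Usage (hypothetical): tops_history collects the region's top coordinate frame by frame;
        # the alarm fires only if the texture classifier accepted the region AND
        # ascending_growth(tops_history) is True.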

    Reconocimiento dinámico y estático de trazos (Dynamic and Static Recognition of Strokes)

    No full text

    Face Recognition based Only on Eyes' Information and Local Binary Pattern

    No full text
    Abstract. In this paper, an implementation of the Local Binary Pattern algorithm for face recognition using only partial information of the face is presented. The main contribution of this work is that, by segmenting the parts of the face (forehead, eyes, mouth), a person can be recognized using only their eyes, achieving a recognition rate of up to 69%, which, considering the limited information provided, is a good success rate. In the test phase the AR face database was used; the face is located with the Viola-Jones method and segmented to obtain templates for each person and each part of the face, and the Euclidean distance was used for the classification task. Because a real application does not always have the whole face of the person to identify, the proposed system shows that good results can be obtained with partial information about it; in addition, the results show that the correct person was always found within rank 6, which is also useful in real applications.
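
    A minimal sketch of the described pipeline is given below: the face is located with OpenCV's Haar-cascade (Viola-Jones) detector, the eye band is described with an LBP histogram, and gallery identities are ranked by Euclidean distance, so the rank-6 result corresponds to checking the first six candidates. The eye-band crop, the parameters, and names such as eye_descriptor and rank_candidates are illustrative assumptions, not the paper's exact implementation.

        # Hedged sketch: eyes-only matching with LBP and Euclidean distance.
        import cv2
        import numpy as np
        from skimage.feature import local_binary_pattern

        # Viola-Jones face detector shipped with OpenCV as a Haar cascade
        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def eye_descriptor(gray, points=8, radius=1):
            """Locate the face, keep only a rough eye band, return its LBP histogram."""
            x, y, w, h = face_cascade.detectMultiScale(gray)[0]
            eye_band = gray[y + h // 4 : y + h // 2, x : x + w]   # upper-middle strip (assumption)
            lbp = local_binary_pattern(eye_band, points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
            return hist

        def rank_candidates(gray_probe, gallery):
            """gallery: {person_id: eye descriptor}. Returns ids sorted by Euclidean
            distance; checking the first six entries mirrors the rank-6 result above."""
            probe = eye_descriptor(gray_probe)
            return sorted(gallery, key=lambda pid: np.linalg.norm(probe - gallery[pid]))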

    Using Twitter Data to Monitor Natural Disaster Social Dynamics: A Recurrent Neural Network Approach with Word Embeddings and Kernel Density Estimation

    Get PDF
    In recent years, Online Social Networks (OSNs) have received a great deal of attention for their potential use in the spatial and temporal modeling of events owing to the information that can be extracted from these platforms. Within this context, one of the most relevant applications is the monitoring of natural disasters. Vital information posted by OSN users can contribute to relief efforts during and after a catastrophe. Although it is possible to retrieve data from OSNs using embedded geographic information provided by GPS systems, this feature is disabled by default in most cases. An alternative solution is to geoparse specific locations using language models based on Named Entity Recognition (NER) techniques. In this work, a sensor that uses Twitter is proposed to monitor natural disasters. The approach is intended to sense data by detecting toponyms (named places written within the text) in tweets with event-related information, e.g., a collapsed building on a specific avenue or the location at which a person was last seen. The proposed approach is carried out by transforming tokenized tweets into word embeddings: a rich linguistic and contextual vector representation of textual corpora. Pre-labeled word embeddings are employed to train a Recurrent Neural Network variant, known as a Bidirectional Long Short-Term Memory (biLSTM) network, that is capable of dealing with sequential data by analyzing information in both directions of a word (past and future entries). Moreover, a Conditional Random Field (CRF) output layer, which models the transitions from one NER tag to another, is used to increase the classification accuracy. The resulting labeled words are joined to coherently form a toponym, which is geocoded and scored by a Kernel Density Estimation function. At the end of the process, the scored data are presented graphically to depict areas in which the majority of tweets reporting topics related to a natural disaster are concentrated. A case study on Mexico’s 2017 Earthquake is presented, and the data extracted during and after the event are reported.
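
    To make the final scoring step concrete, the sketch below applies a Kernel Density Estimation to a handful of geocoded toponym coordinates and evaluates it on a regular grid, which is how a heat map of concentrated reports can be produced. The sample coordinates, the grid resolution, and the use of SciPy's gaussian_kde are illustrative assumptions rather than the paper's exact implementation.

        # Hedged sketch: KDE scoring of geocoded toponyms extracted by the NER tagger.
        import numpy as np
        from scipy.stats import gaussian_kde

        # coords: (2, n) array of [longitudes; latitudes] from geocoded toponyms (hypothetical sample)
        coords = np.array([[-99.13, -99.16, -99.12, -99.18],
                           [ 19.43,  19.40,  19.42,  19.35]])
        kde = gaussian_kde(coords)

        # Evaluate the density on a regular grid covering the affected area
        lon = np.linspace(coords[0].min() - 0.05, coords[0].max() + 0.05, 100)
        lat = np.linspace(coords[1].min() - 0.05, coords[1].max() + 0.05, 100)
        grid_lon, grid_lat = np.meshgrid(lon, lat)
        density = kde(np.vstack([grid_lon.ravel(), grid_lat.ravel()])).reshape(grid_lat.shape)
        # 'density' can now be rendered as a heat map of where disaster reports concentrate.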